Bitcoin Forum
Author Topic: Proposal: Transaction-Directed Acyclic Graphs  (Read 6181 times)
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 09, 2016, 09:52:30 AM
 #1

I recently stumbled across an old blog post (https://blog.ethereum.org/2014/07/05/stake/) by Vitalik Buterin in which he gave the following simplified version of Daniel Larimer's Transactions as Proof of Stake (TaPoS)(http://bravenewcoin.com/assets/Uploads/TransactionsAsProofOfStake10.pdf):

1. "Every transaction must contain a reference (ie. hash) to the previous transaction"
2. "A candidate state-of-the-system is obtained by calculating the result of a resulting transaction chain"
3. "The correct chain among multiple candidates is the one that has either (i) the longest coin-days-destroyed (ie. number of coins in the account * time since last access), or (ii) the highest transaction fees"

Source: https://blog.ethereum.org/2014/07/05/stake/ (The strike-through of the coin-days option was just added here, as I will not discuss coin-days type schemes. Indeed, nothing here will look much like proof of stake, and this is better thought of as an alternative to Bitcoin's final stage in which all mining rewards come from transaction fees.)

I was really struck by the elegant simplicity of this proposal. It also solves the "nothing at stake" problem with PoS, as observed in the article.

Of course in practice, it would not scale at all well, as Vitalik Buterin points out. The question then is whether one could create an algorithm that keeps most of this wonderful simplicity, while improving the scaling. In the rest of this post, I'd like to discuss one possible solution. Any comments on whether this would actually work would be much appreciated, and if someone with more free time than me felt like implementing it, I would be incredibly flattered.

The chief change is that, rather than having a block-chain or, as with TaPoS, a transaction-chain, we instead have a transaction-directed acyclic graph (transaction-DAG/TDAG), with each transaction containing a list/Merkle-tree of hashes of previous transactions.

If I understand correctly, the chief reason the simple TaPoS given above would not scale is that with high transaction volume, large numbers of transactions will all claim to be descended from a single parent transaction. Thus there will be huge volumes of "orphan-transactions", and probably even quite long orphan-chains. This would mean that it would be almost impossible for low value transactions to get into the chain. However, with transactions forming a DAG, this problem is potentially solvable. While large numbers of transactions may still share a common parent (or parents), providing they do not contradict each other (to be made clear), they may be brought back together by having the next generation of transactions "descend" from all of them.

A TDAG would need to satisfy the following properties to be accepted by the network (a rough validity-check sketch follows the list):

1. The partial order associated with the DAG forms a bounded lattice, so in particular, there is a unique root node, and a unique childless node, which must be the transaction being submitted to the network.
2. Let P(N) be the set of all parents of a node N, and A(N) be the set containing N and all of its ancestor nodes (e.g. parents, grand-parents etc. all the way back to the root). Then for all N in the DAG, and all M1 in P(N), there is at least one node Q in A(M1) such that for all M2 in P(N) with M2 ≠ M1, Q is not in A(M2). (I.e. a node can only claim a parent if doing so strictly increases its set of ancestors.)
3. For each node N in the DAG, there is a topological ordering of A(N), say M1, M2, ..., MK, with MK=N such that for all k in 1,...,K, the inputs to the transaction contained in Mk are unspent outputs from nodes in M1,...,M(k-1). (I.e. the DAG ought to be at least ex-post rationalizable as representing a chain of transactions.)
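
Purely as illustration, here is a rough Python sketch of how rules 1-3 might be checked, assuming a toy transaction model with explicit (txid, output-index) inputs; rule 1 is checked only via its unique-root/unique-sink implication, and all names are made up rather than part of the proposal.

Code:
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    parents: list   # txids of parent nodes in the TDAG
    inputs: list    # (txid, output_index) outpoints being spent
    outputs: list   # output values

def ancestors(txid, txs):
    # A(N): the node itself plus everything reachable through parent links.
    seen, stack = set(), [txid]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(txs[t].parents)
    return seen

def valid_tdag(txs):
    # txs: dict mapping txid -> Tx. Returns True if rules 1-3 hold.
    # Rule 1 (partial check): unique root and unique childless node.
    roots = [t for t in txs if not txs[t].parents]
    has_child = {p for t in txs for p in txs[t].parents}
    sinks = [t for t in txs if t not in has_child]
    if len(roots) != 1 or len(sinks) != 1:
        return False

    anc = {t: ancestors(t, txs) for t in txs}

    # Rule 2: each parent must contribute at least one ancestor not already
    # contributed by the node's other parents.
    for n, tx in txs.items():
        for m1 in tx.parents:
            others = set().union(*[anc[m2] for m2 in tx.parents if m2 != m1])
            if anc[m1] <= others:
                return False

    # Rule 3 (simplified): every input references an output of a strict ancestor,
    # and no output is spent twice anywhere in the DAG; with explicit outpoints
    # this guarantees a serialisation free of double-spends.
    spent = set()
    for n, tx in txs.items():
        for (src, idx) in tx.inputs:
            if src == n or src not in anc[n] or idx >= len(txs[src].outputs):
                return False
            if (src, idx) in spent:
                return False
            spent.add((src, idx))
    return True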

For this mechanism to work, people need to be incentivised to include as many childless transactions as possible as parents of their transaction. They also need to be incentivised to add their transaction as near as possible to the bottom of the DAG. To do this, I propose the following rules (a small fee-accounting sketch follows the list).

1. As with Bitcoin, the difference between the sum of inputs and the sum of outputs on a transaction is interpreted as a transaction fee, which we treat as an unspent output when applying rule 3 above.
2. Whatever happens, half of all transaction fees are permanently discarded.
3. The maker of a new transaction N can include up to one half of the total transaction fees from P(N) as inputs within their transaction (i.e. up to all of the remainder).
4. Where selection between multiple irreconcilable TDAGs needs to be made (i.e. when it is not possible to form a new TDAG with a bottom node whose parents are the bottom nodes of the original TDAGs), the TDAG with the highest sum of total transaction fees paid should be picked. In the case of a tie, of those TDAGs with the same total transaction fees paid, the one with the highest total unclaimed transaction fees should be picked.
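
Again purely as illustration, a small fee-accounting sketch of rules 1-4 above, using hypothetical field names rather than any real wallet or node API.

Code:
def tx_fee(total_inputs, total_outputs):
    # Rule 1: the fee is whatever the inputs leave unspent.
    return total_inputs - total_outputs

def max_claimable_from_parents(parent_fees):
    # Rules 2-3: half of each parent's fee is discarded for good; a child may
    # claim at most the remaining half of its parents' combined fees.
    return 0.5 * sum(parent_fees)

def select_tdag(candidates):
    # Rule 4: among irreconcilable TDAGs, pick the highest total fees paid,
    # breaking ties by the highest total unclaimed fees.
    # Each candidate is a dict with 'total_fees' and 'unclaimed_fees'.
    return max(candidates, key=lambda c: (c['total_fees'], c['unclaimed_fees']))

# Example: a parent pays a fee of 10 coins. At most 5 can ever be reclaimed by
# its children (rule 3); the other 5 are gone for good (rule 2).
print(max_claimable_from_parents([10.0]))   # -> 5.0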

To understand the implications of this, suppose there is a node N with children M1, M2,..., MK each with no other parents. If K=1, and the one child has claimed the maximum allowable fee, then the minimum possible amount is discarded, i.e. just half the transaction fee. If they claim less than the maximum possible, then more than one half of the transaction fees will be discarded (although in reality, someone would realise some were still available, and would submit an additional child transaction to claim it). If K=2, and again each child has claimed the maximum allowable fee, then there can be no TDAG containing N, M1 and M2, since such a TDAG would feature spending already spent outputs. Thus, either the makers of nodes M1 and M2 are less greedy in their claims (the likely outcome), or one of these nodes will not be included in the eventual TDAG. If (say) M1 arrived significantly before M2, then they are likely to have more children of their own, and hence a TDAG containing M1  is likely to have higher total transaction fees than one containing M2, and so the one(s) containing M2 would not be selected. Ideally, wallets should dynamically adjust the claimed amount to give a high probability of transaction inclusion. I would then expect that opportunistic people with high network bandwidth would mop up the odd bit of remaining unclaimed transaction fees.

As far as I can see, as with the original TaPoS, this is still not vulnerable to the "nothing at stake" problem with PoS. There is no incentive to submit transactions to multiple TDAGs. What other problems am I missing?

As I said, any comments would be much appreciated. As would pinging any potentially interested parties.

Note: First posted here: https://www.reddit.com/r/btc/comments/4n6nq8/proposal_transactiondirected_acyclic_graphs/ (where it drowned in the noise).
spartacusrex
Hero Member
*****
Offline Offline

Activity: 718
Merit: 545



View Profile
June 09, 2016, 01:25:12 PM
 #2

yo.

have you checked out DAGcoin.. ?

https://bitcointalk.org/index.php?topic=1177633.0

I think it's what you are proposing..

Life is Code.
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 09, 2016, 02:12:38 PM
 #3

I hadn't heard of that before. It's interesting, but definitely not the same. Quite different structures, incentives and benefits.
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 09, 2016, 03:06:12 PM
Last edit: June 09, 2016, 03:18:15 PM by TomHolden
 #4

It appears to have been developed into Iota: https://bitcointalk.org/index.php?topic=1216479.0 , and their white paper gives further references to other people talking about DAGs. So it seems using a DAG is not novel, though it also means it's not crazy. What hasn't been pursued though is the combination of the simple TaPoS idea with a DAG, as I suggest here.

One advantage of my scheme over Iota's is that it gives stronger incentives for adding to young nodes, both via transaction fees and thanks to the rule that each parent must strictly expand the set of ancestors. It also doesn't rely on transactions "approving" other transactions to ensure internal consistency; instead, consistency is a requirement for any valid TDAG.
spartacusrex
Hero Member
*****
Offline Offline

Activity: 718
Merit: 545



View Profile
June 09, 2016, 03:10:22 PM
 #5

Why burn any of the fees at all ?

What am I missing ?


Life is Code.
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 09, 2016, 03:20:54 PM
 #6

Burning fees defends against a double-spend attack in which someone secretly constructs an alternative TDAG, pays large amounts of transaction fees within it, and then claims those fees back themselves.
iotatoken
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500


View Profile
June 09, 2016, 03:51:32 PM
 #7

This is indeed quite similar to IOTA.

Join us on http://chat.iotatoken.com :)

iamnotback
Sr. Member
****
Offline Offline

Activity: 336
Merit: 265



View Profile
June 10, 2016, 08:14:44 PM
 #8

Two people have PM'ed me this. They always expect AnonyMint to provide some analysis.

The issue seems to be the same as it was for Iota and Sergio's DAG design, which is that there is no way to prevent double-spends on multiple branches of the DAG. We covered this in great detail at the linked thread:

https://bitcointalk.org/index.php?topic=1319681.msg13830403#msg13830403

Iota "solves" this by hoping that all payers and payees run a Monte Carlo algorithm, but the only way to enforce it is to have centralized servers, which is what Iota has at launch.

I suppose you propose to incentivize one longest chain in another way, but the game theory remains that, without proof-of-work blocks, there is no Nash equilibrium resolving to a single longest-chain rule.

Sorry DAGs don't work. But I am not screaming to tell people not to invest in Iota, because frankly Satoshi's proof-of-work block chain centralizes as well, as is clearly on display.

You may want to quote this post before it is nuked by Hitler.

klee
Legendary
*
Offline Offline

Activity: 1498
Merit: 1000



View Profile
June 10, 2016, 08:23:27 PM
 #9

Quote from: iamnotback on June 10, 2016, 08:14:44 PM


Anti-Nazi action
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 11, 2016, 01:13:56 PM
 #10

Quote from: iamnotback on June 10, 2016, 08:14:44 PM
Two people have PM'ed me this. They always expect AnonyMint to provide some analysis.

The issue seems to be the same as it was for Iota and Sergio's DAG design, which is that there is no way to prevent double-spends on multiple branches of the DAG. We covered this in great detail at the linked thread:

https://bitcointalk.org/index.php?topic=1319681.msg13830403#msg13830403

Iota "solves" this by hoping that all payers and payees run a Monte Carlo algorithm, but the only way to enforce it is to have centralized servers, which is what Iota has at launch.

I suppose you propose to incentivize one longest chain in another way, but the game theory remains that, without proof-of-work blocks, there is no Nash equilibrium resolving to a single longest-chain rule.

Apologies for the belated reply, and thanks for seriously engaging on this.

The linked post is conditioned on there not being a longest chain rule. But here, there is an equivalent to the longest chain rule, namely the "heaviest TDAG", where weight is the sum of total transaction fees, with the sum of spent transaction fees used for tie-breaking.

So let's analyse a double spend in more detail. Suppose a user broadcasts two alternate transactions, G and B. For simplicity, suppose that both transactions have the same parents, claim the same amount of the parents' transaction fees, use the same inputs, and pay the same transaction fees. They differ in that transaction G is a payment to a merchant for some good, whereas transaction B sends the coins to an alternative address owned by the same user.

In the proposal here, the two transactions can never be included in the same TDAG. Remember a TDAG is a bounded lattice, so there is some node that is a (...(great-)grand-)child of every other node, namely the transaction being submitted. Remember also that a node cannot descend from two parents if doing so would mean there was no serialized history without a double-spend. Because of this, valid TDAGs cannot contain double spends anywhere.

Thus, any subsequent transaction can only descend from either transaction G or from transaction B, else it will not be considered valid by the network. If clients obey the rules proposed above, then whichever of G or B happens by chance to arrive slightly earlier will rapidly obtain many child transactions. The network would then soon settle on this one transaction, thanks to the "heaviest TDAG" selection rule, just as with Bitcoin. Even if B is eventually selected, the merchant should not lose anything, since, just as with Bitcoin, they ought to wait until the G transaction is sufficiently deep before sending goods.

But to play devil's advocate, suppose the network permanently forks, with one TDAG including G and one TDAG including B. And suppose that some users override their client to ignore the "heaviest TDAG" selection rule so as to maintain these forks. Do users have an incentive to submit transactions on both alternate TDAGs? I.e. is there any reason why one fork would not rapidly die, even with some users ignoring the heaviest-TDAG selection rule?

Well, suppose I am transacting with a merchant. If the merchant is "naive" in that they use the default client, without overriding the "heaviest TDAG" rule, then I certainly have no incentive to submit on both forks; rather, my incentive is to submit the transaction to the fork the merchant is on (which will be the heaviest TDAG), and not to submit the transaction on the other fork. If the other fork eventually wins, this means I will have obtained the good for free, without even paying transaction fees (the latter is why I strictly prefer to submit no transaction on the other fork rather than submitting a transaction to myself). This is essentially the argument of V. Buterin linked from the OP. So if an agent is transacting with someone obeying the "heaviest TDAG" rule, then they will strictly prefer to only submit transactions to the heaviest TDAG, further reinforcing that TDAG.

What about if the merchant understands that the network has forked? In that case, the merchant might require the transaction to be included on both forks, which would prolong the forks' lives. But is this reasonable in the long-term? For this to be maintained, every user must have downloaded an alternative version of the client which ignores the heaviest-TDAG rule. As long as there are some users using a client that respects the rule, the initially heaviest fork will grow faster than the other one, and at some point even the "awkward" users ought to forget about the other fork. Note too that one could maintain alternative Bitcoin chains if one started by assuming that everyone had modified their code to ignore the longest chain rule, so this is really a perverse hypothetical.

You might argue, though, that the maker of the original transaction that split the network might obtain a benefit from maintaining the split. In particular, in a classic double-spend type attack, they would publicly build on G, sending transactions to their own addresses. This would eventually convince the merchant that G is sufficiently confirmed that they could send the goods. Meanwhile, in private, the maker of the original transaction would be building on B, with the intention of revealing their TDAG once the goods had been dispatched.

For the network to later switch from the G fork to the B fork, the B fork would have to have a higher quantity of total transaction fees. Now, the maker of the original transaction can reduce the cost of this for themselves by including any public transactions that do not descend from G, as well as by claiming the maximum allowed transaction fees from their prior transactions. But neither of these approaches is particularly effective. Firstly, once G has been observed by a client, all the client's future transactions will descend from G, so there are unlikely to be many transactions on the good fork that do not descend from G. Secondly, by assumption, half of all transaction fees must remain unclaimed in a valid TDAG. This means that the maker of the original transaction must permanently lose at least half of all the transaction fees, on the good fork, of transactions that descend from G. Given that any given transaction is small compared to the network as a whole, this will be a huge amount. (And this gives the complete answer to spartacusrex's question previously. I clearly should have spelt out these details to start with.)

Hence, this "attack" leads to a simple confirmation acceptance rule for merchants that will guarantee that the attack is non-profitable: A transaction is considered confirmed when the sum of transaction fees of all transactions descending from the given transaction is more than twice the value of the original transaction.

So, iamnotback, I hope this answers your objections. I confess that the details of Iota remain rather murky to me (it doesn't seem to have managed to preserve the simplicity of TaPoS), but if it's correct that they are vulnerable to the problem you mentioned, and I am not, then thanks are in order for pointing this out to me.

Best,

Tom
Come-from-Beyond
Legendary
*
Offline Offline

Activity: 2142
Merit: 1009

Newbie


View Profile
June 11, 2016, 03:55:17 PM
 #11

Quote from: iamnotback on June 10, 2016, 08:14:44 PM
The issue seems to be the same as it was for Iota and Sergio's DAG design, which is that there is no way to prevent double-spends on multiple branches of the DAG.

You can't just claim that there is no way to prevent double-spends. A blockchain is a special case of a DAG, and if you are right then there must be a line after which everything breaks, because (as we see in Bitcoin) the special case of a DAG works pretty well. If this line is drawn between "single branch" and "multiple branches", then how does Ethereum manage to function?
tonych
Legendary
*
Offline Offline

Activity: 964
Merit: 1008


View Profile WWW
June 14, 2016, 07:06:15 PM
 #12

I spent some time on DAG-based designs, and this one reminds me of something that I rejected early on.

As far as I understand, the level of security against double-spends basically depends on the total amount spent on fees on the legit DAG. If so, an attacker just secretly builds a subDAG that includes a few double-spends, spends more on fees in the subDAG than was spent in the legit DAG, then publishes his subDAG. Now everybody should switch to the subDAG, as it has burnt more fees. As long as the total amount of double-spends exceeds the fees, the attack succeeds.

Simplicity is beauty
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 14, 2016, 09:54:15 PM
 #13

Tonych suggests a double spend attack in which:

As long as the total amount of double-spends exceeds the fees, the attack succeeds.

However, as I stated in the more extensive discussion of double spend attacks above:

A transaction is considered confirmed when the sum of transaction fees of all transactions descending from the given transaction is more than twice the value of the original transaction.

This rules out the kind of double spend attack tonych considered. By the time the transaction is confirmed, at which point the attacker would like to publish their secret TDAG, it is already unprofitable for them to do this.

Providing individual transactions are small relative to the network, this will not be an infeasibly high barrier for confirmation.
tonych
Legendary
*
Offline Offline

Activity: 964
Merit: 1008


View Profile WWW
June 14, 2016, 10:23:19 PM
 #14

Tom, the trick is in doing many double-spend attacks at the same time. Each individual victim has a relatively small amount at stake, so she will wait only a relatively short time, until (as you say) total fees equal twice the value of her transaction. But she doesn't know that the attacker is profiting from several victims at once, so her analysis of the attacker's ROI is wrong.

The attacker has a sort of leverage in that he has to construct a shadow DAG and pay its fees only once but will profit from multiple unsuspecting and uncoordinated victims.
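
To put made-up numbers on that leverage (purely illustrative; none of these figures are claims about the protocol's actual parameters):

Code:
shadow_dag_cost = 1000.0   # hypothetical one-off cost: fees the attacker pays so his
                           # shadow DAG outweighs the legit DAG when published
victim_payment = 100.0     # amount double-spent against each victim
num_victims = 25           # simultaneous, uncoordinated victims

attack_profit = num_victims * victim_payment - shadow_dag_cost
print(attack_profit)       # 1500.0: the cost is paid once, the take scales with victims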

Simplicity is beauty
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 14, 2016, 11:09:33 PM
 #15

Smart, sorry, I'd missed that from your original post. But it seems like V. Buterin's argument from the link in the OP still applies:

"This seems weak, but in reality it isn’t; we know that in the case of Bitcoin, once the currency supply stops increasing mining will rely solely on transaction fees, and the mechanics are exactly the same (since the amount that the network will spend on mining will roughly correspond to the total number of txfees being sent in); hence, fee-based TaPoS is in this regard at least as secure as fee-only PoW mining."

(The premise of this argument is a standard "free entry" condition. Entry continues until returns are driven down to zero.)

I will have to think about whether an improved confirmation condition, robust to multiple double-spends, could be devised.
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 15, 2016, 11:11:12 AM
 #16

Quote from: TomHolden on June 14, 2016, 11:09:33 PM
I will have to think about whether an improved confirmation condition, robust to multiple double-spends, could be devised.

It is easy to track the total spend on transaction fees of descendants of the original transaction. A confirmation test of equivalent robustness to Bitcoin's 6-confirmation test would wait until the total spend on transaction fees by descendants was higher than the total expenditure of Bitcoin miners in mining 6 blocks at present (including electricity, premises rent, and amortized equipment costs). This is a large number, but so too are total transaction fees if the coin is actually being used.
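
To put purely hypothetical numbers on this (the per-block cost figure below is a placeholder, not a measured value):

Code:
assumed_cost_per_block = 150000.0   # placeholder, in whatever unit fees are measured in
blocks = 6
confirmation_threshold = blocks * assumed_cost_per_block   # = 900000.0

def robustly_confirmed(descendant_fee_total):
    # Confirmed once descendants have spent more on fees than miners would spend
    # producing 6 Bitcoin blocks, under the assumed per-block cost.
    return descendant_fee_total > confirmation_threshold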
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 15, 2016, 02:43:33 PM
 #17

While waiting for usage to be sufficiently high to get security from transaction fees, it is easy to introduce a PoW component (which also solves the initial distribution problem).

Rather than being completely destroyed, half of all transaction fees would instead be set aside to be claimed by future PoW transactions. PoW transactions would be added to the TDAG much like any other transaction, but would contain a PoW instead of inputs. Additionally, PoW transactions would be restricted to only descend from TDAG-nodes with unclaimed transaction fees from the set-aside ("burnt") half, to ensure that miners have an incentive to include recent transactions.
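
As a rough sketch of what admitting such a PoW transaction might look like (the hashcash-style target test and the field names are illustrative assumptions, not a specification):

Code:
import hashlib

def meets_pow_target(header_bytes, nonce, difficulty_bits=20):
    # Toy hashcash-style check: the hash must start with difficulty_bits zero bits.
    digest = hashlib.sha256(header_bytes + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

def pow_tx_admissible(parent_nodes, header_bytes, nonce):
    # A PoW transaction carries work instead of inputs, and may only descend from
    # parents that still hold unclaimed set-aside ("burnt") fees.
    if not meets_pow_target(header_bytes, nonce):
        return False
    return all(p["set_aside_fees_remaining"] > 0 for p in parent_nodes)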
Come-from-Beyond
Legendary
*
Offline Offline

Activity: 2142
Merit: 1009

Newbie


View Profile
June 15, 2016, 02:54:45 PM
 #18

Quote from: TomHolden on June 15, 2016, 02:43:33 PM
which also solves the initial distribution problem

How does it solve the initial distribution?
tonych
Legendary
*
Offline Offline

Activity: 964
Merit: 1008


View Profile WWW
June 15, 2016, 03:54:11 PM
 #19

Quote from: TomHolden on June 15, 2016, 11:11:12 AM
I will have to think about whether an improved confirmation condition, robust to multiple double-spends, could be devised.

It is easy to track the total spend on transaction fees of descendants of the original transaction. A confirmation test of equivalent robustness to Bitcoin's 6-confirmation test would wait until the total spend on transaction fees by descendants was higher than the total expenditure of Bitcoin miners in mining 6 blocks at present (including electricity, premises rent, and amortized equipment costs). This is a large number, but so too are total transaction fees if the coin is actually being used.

As you increase confirmation time, you further inconvenience all users, while the attacker only has to find more victims.

Also, you assume that total transaction fees are large when the coin is actively used, but I can't see (or I have overlooked) how you incentivize each individual user to pay higher fees.

Simplicity is beauty
TomHolden (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
June 15, 2016, 10:57:21 PM
 #20

Quote from: Come-from-Beyond on June 15, 2016, 02:54:45 PM
How does it solve the initial distribution?

Just because there's a PoW component (initially at least), which produces new coins. You might not like mining, but it's established enough that few would seriously object to using it for distribution. (Though it's of course always preferable if the PoW is GPU/ASIC resistant.)